A Study of Coppersmith's Block Wiedemann Algorithm Using Matrix Polynomials
Author
Gilles Villard
Abstract
We analyse a randomized block algorithm proposed by Coppersmith for solving large sparse systems of linear equations, Aw = 0, over a finite field K = GF(q). It is a modification of an algorithm of Wiedemann. Coppersmith gave heuristic arguments to explain why the algorithm works, but it was an open question to prove that it can produce a solution, with positive probability, over small finite fields, e.g. K = GF(2). We answer this question nearly completely. The algorithm uses two random matrices X and Y of dimensions m x N and N x n. Over any finite field, we show how the parameters m and n may be tuned so that, for any input system, a solution is computed with high probability. Conversely, for certain particular, non-pathological input systems, we show that the conditions on m and n may be relaxed while still ensuring a good probability of success. We also improve a probability bound of Kaltofen in the case of finite fields with a large number of elements. Finally, to complete our generalization of Wiedemann's study to the matrix case, we briefly sketch a deterministic version of the block algorithm.
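As a rough illustration of the scalar (non-block) Wiedemann idea that the algorithm above generalizes, the following sketch finds a kernel vector of a singular matrix over GF(2): it projects the Krylov sequence A^i y through a random vector x, recovers the minimal polynomial of the resulting scalar sequence with Berlekamp-Massey, and evaluates that polynomial at A to extract w with Aw = 0. This is a toy sketch of the classical single-vector method, not the block algorithm analysed in the paper; all function names are ours.

```python
# Toy sketch of scalar Wiedemann over GF(2) (the paper's block version
# replaces the vectors x, y by m x N and N x n blocks X, Y). Illustrative only.
import random

def matvec(A, v):
    """Product A v over GF(2); A is a dense list of 0/1 rows."""
    return [sum(a & b for a, b in zip(row, v)) & 1 for row in A]

def berlekamp_massey_gf2(s):
    """Return (C, L): shortest LFSR (connection polynomial C, length L)
    generating the GF(2) sequence s, via Massey's algorithm."""
    C, B = [1], [1]
    L, m = 0, 1
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            if i < len(C):
                d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        elif 2 * L <= n:
            T = C[:]
            while len(C) < len(B) + m:
                C.append(0)
            for i, b in enumerate(B):
                C[i + m] ^= b
            L, B, m = n + 1 - L, T, 1
        else:
            while len(C) < len(B) + m:
                C.append(0)
            for i, b in enumerate(B):
                C[i + m] ^= b
            m += 1
    return C, L

def wiedemann_nullvector(A, trials=50, seed=1):
    """Randomized search for w != 0 with A w = 0 over GF(2); may return None."""
    N = len(A)
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randrange(2) for _ in range(N)]
        y = [rng.randrange(2) for _ in range(N)]
        if not any(y):
            continue
        # Projected Krylov sequence s_i = x^T A^i y for i = 0 .. 2N-1.
        seq, v = [], y[:]
        for _ in range(2 * N):
            seq.append(sum(a & b for a, b in zip(x, v)) & 1)
            v = matvec(A, v)
        C, L = berlekamp_massey_gf2(seq)
        # Minimal polynomial of the sequence = reversal of C, degree L.
        mpoly = (C + [0] * (L + 1 - len(C)))[::-1]
        # Write mpoly = lambda^k * g with g(0) = 1; k = 0 gives no kernel info.
        k = 0
        while k <= L and mpoly[k] == 0:
            k += 1
        if k == 0 or k > L:
            continue
        # u = g(A) y; then some A^j u (j < k) should lie in the kernel.
        g = mpoly[k:]
        u, v = [0] * N, y[:]
        for gi in g:
            if gi:
                u = [a ^ b for a, b in zip(u, v)]
            v = matvec(A, v)
        if not any(u):
            continue
        for _ in range(k):
            w = matvec(A, u)
            if not any(w):
                return u  # verified: u != 0 and A u = 0
            u = w
    return None
```

The block version studied in the paper replaces the scalar sequence x^T A^i y by the m x n matrix sequence X A^i Y and Berlekamp-Massey by a matrix-polynomial analogue, which is where the conditions on m and n enter.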
Similar papers
A Block-Wise random sampling approach: Compressed sensing problem
The focus of this paper is the compressed sensing problem. Compressed sensing theory, under certain conditions, relaxes the Nyquist sampling requirement and allows far fewer samples to be taken. One of the important tasks in this theory is to carefully design the measurement matrix (sampling operator). Most existing methods in the literature attempt to optimize a randomly initiali...
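To make the snippet above concrete, here is a hypothetical illustration of taking compressed measurements y = Phi x with a random Gaussian measurement matrix Phi (m x N, m << N), the kind of randomly initialised operator the snippet refers to; sparse recovery (e.g. by l1 minimisation) is not shown, and all names are ours.

```python
# Sketch only: compressed measurements of a sparse signal with a random
# Gaussian measurement matrix. m = 16 numbers summarise an N = 64-dim signal.
import random

def gaussian_measurement_matrix(m, N, rng):
    """Random m x N matrix with independent standard Gaussian entries."""
    return [[rng.gauss(0.0, 1.0) for _ in range(N)] for _ in range(m)]

def measure(Phi, x):
    """Linear measurements y = Phi x."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in Phi]

rng = random.Random(0)
N, m = 64, 16                 # ambient dimension vs. number of measurements
Phi = gaussian_measurement_matrix(m, N, rng)
x = [0.0] * N
x[3], x[40] = 1.0, -2.0       # a 2-sparse signal
y = measure(Phi, x)           # each y[i] mixes the two nonzero entries of x
```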
Plethysm and fast matrix multiplication
Motivated by the symmetric version of matrix multiplication we study the plethysm $S^k(\mathfrak{sl}_n)$ of the adjoint representation $\mathfrak{sl}_n$ of the Lie group $SL_n$. In particular, we describe the decomposition of this representation into irreducible components for $k=3$, and find highest weight vectors for all irreducible components. Relations to fast matrix multiplication, in part...
Solving Homogeneous Linear Equations over GF(2) via Block Wiedemann Algorithm
We propose a method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements. We modify an algorithm due to Wiedemann. A block version of the algorithm allows us to perform 32 matrix-vector operations for the cost of one. The resulting algorithm is competitive with structured Gaussian elimination in terms of time and has much lower space requiremen...
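The "32 matrix-vector operations for the cost of one" in the snippet above comes from packing one GF(2) entry of each of 32 (or 64) vectors into a single machine word, so that one word-wide XOR performs that many GF(2) additions at once. A minimal sketch using Python integers as bit vectors (illustrative; not code from the paper):

```python
# Illustrative sketch: apply a sparse GF(2) matrix to 64 vectors at once.
# words[j] packs the j-th entry of all 64 stacked vectors into one word,
# so each XOR below performs 64 GF(2) additions simultaneously.

def block_matvec(rows, words):
    """rows[i] lists the column indices j with A[i][j] = 1 (sparse rows).
    Returns A applied simultaneously to all vectors packed in words."""
    out = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= words[j]  # one XOR = 64 parallel GF(2) additions
        out.append(acc)
    return out
```

For example, with `rows = [[0, 2], [1], [0, 1, 2]]` and two vectors packed into the low bits of `words = [0b01, 0b10, 0b11]`, bit 0 of each output word is A applied to the first vector and bit 1 is A applied to the second.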
Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor
In the past few years, successive improvements of the asymptotic complexity of square matrix multiplication have been obtained by developing novel methods to analyze the powers of the Coppersmith-Winograd tensor, a basic construction introduced thirty years ago. In this paper we show how to generalize this approach to make progress on the complexity of rectangular matrix multiplication as well,...
On Degeneration of Tensors and Algebras
An important building block in all current asymptotically fast algorithms for matrix multiplication is tensors with low border rank, that is, tensors whose border rank is equal to or very close to their size. To find new asymptotically fast algorithms for matrix multiplication, it seems important to understand those tensors whose border rank is as small as possible, so-called tensors of min...